
    An Energy Balanced Dynamic Topology Control Algorithm for Improved Network Lifetime

    In wireless sensor networks, a few sensor nodes end up being vulnerable to potentially rapid depletion of their battery reserves due to either their central location or the traffic patterns generated by the application. Traditional energy management strategies, such as those using topology control algorithms, reduce the energy consumed at each node to the minimum necessary. In this paper, we take a different approach that balances the energy consumption across the nodes, thus increasing the functional lifetime of the network. We propose a new distributed dynamic topology control algorithm called Energy Balanced Topology Control (EBTC), which considers the actual energy consumed for each transmission and reception to achieve the goal of an increased functional lifetime. We analyze the algorithm's computational and communication complexity and show that it is equivalent to or lower than that of other dynamic topology control algorithms. Using an empirical model of energy consumption, we show that the EBTC algorithm increases the lifetime of a wireless sensor network by over 40% compared to the best of previously known algorithms.
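The abstract does not give EBTC's actual rules, but the core idea of balancing energy rather than minimizing it can be sketched as a relay choice that maximizes the minimum residual lifetime across sender and relay. All names and cost models below are illustrative assumptions, not the EBTC formulas:

```python
def residual_lifetime(battery, cost):
    # rounds a node can sustain at the given per-round energy cost
    return battery / cost if cost > 0 else float("inf")

def pick_next_hop(node, neighbors, tx_cost, rx_cost):
    """Energy-balancing relay choice (sketch): instead of minimizing the
    sender's transmit energy, pick the neighbor that maximizes the minimum
    post-transmission lifetime across sender and relay.

    `neighbors` maps id -> (distance, battery); `tx_cost`/`rx_cost` are
    caller-supplied cost models."""
    best, best_score = None, -1.0
    for nid, (dist, battery) in neighbors.items():
        sender_life = residual_lifetime(node["battery"], tx_cost(dist))
        relay_life = residual_lifetime(battery, rx_cost())
        score = min(sender_life, relay_life)  # lifetime bottleneck
        if score > best_score:
            best, best_score = nid, score
    return best
```

A minimum-energy scheme would always pick the nearest neighbor; the balanced rule can prefer a farther relay with a fuller battery.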

    Fundamental Limits in Multimedia Forensics and Anti-forensics

    As the use of multimedia editing tools increases, people have begun to question the authenticity of multimedia content. This is an especially serious concern for authorities, such as law enforcement, news reporters, and governments, who regularly rely on multimedia evidence to make critical decisions. To verify the authenticity of multimedia content, many forensic techniques have been proposed to identify the processing history of the content in question. However, as new technologies emerge and more complicated scenarios are considered, forensic researchers have gradually come to recognize the limitations of multimedia forensics, making the exploration of its fundamental limits inevitable. In this dissertation, we propose several theoretical frameworks to study the fundamental limits in various forensic problems. Specifically, we begin by developing empirical forensic techniques to address the limitations of existing techniques in the face of a new technology, compressive sensing. We then go one step further and explore the fundamental limits of forensic performance. Two types of forensic problems are examined. In operation forensics, we propose an information-theoretic framework and define forensicability as the maximum information that features contain about hypotheses of processing histories. Based on this framework, we determine the maximum number of JPEG compressions one can detect. In order forensics, an information-theoretic criterion is proposed to determine when we can and cannot detect the order of manipulation operations applied to multimedia content. Additionally, we examine the fundamental tradeoffs in multimedia anti-forensics, where attacking techniques are developed by forgers to conceal manipulation fingerprints and confuse forensic investigations. In this field, we define concealability as the effectiveness of anti-forensics in concealing manipulation fingerprints. A tradeoff between concealability, rate, and distortion is then proposed and characterized for compression anti-forensics, which provides valuable insight into how forgers may behave under their best strategy.
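The notion of forensicability above, the information features carry about processing-history hypotheses, is an information-theoretic quantity; a minimal sketch of mutual information over a toy joint distribution (the dissertation's actual feature model is not given in the abstract) looks like this:

```python
import math

def mutual_information(joint):
    """I(X;H) in bits from a joint pmf given as {(x, h): p}.
    Here X stands for an observed feature and H for a processing-history
    hypothesis; forensicability would be the maximum of this quantity
    over the forensic setup, not the value for one toy distribution."""
    px, ph = {}, {}
    for (x, h), p in joint.items():  # marginals
        px[x] = px.get(x, 0.0) + p
        ph[h] = ph.get(h, 0.0) + p
    return sum(p * math.log2(p / (px[x] * ph[h]))
               for (x, h), p in joint.items() if p > 0)
```

If features are independent of the hypothesis, I(X;H) = 0 and no detector can do better than guessing; a deterministic feature-hypothesis link yields the full entropy of the hypothesis.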

    Upper Bounds to the Capacity of Wireless Networks

    In this thesis, I focus mainly on the evaluation of upper bounds to the capacity of wireless networks. Considering two measures, the maximal transmission rate for any source-destination pair and the transport capacity of wireless networks, I first summarize the most recent results on the upper bounds of these two measures. I also improve and modify the previous results given in these papers. Moreover, I present a proof of an upper bound on the maximal transmission rate that holds with high probability, taking channel fading into account when full CSI is known only to the receivers. With a simple extension of this result, I derive an upper bound on the transport capacity of wireless networks without full CSI at the receiver side. A linear scaling of the upper bound on transport capacity is also derived when the path loss exponent is greater than three. Compared with previous results, the upper bound given in this thesis is shown to be much tighter for relatively large path loss exponents α under a minimum distance constraint.

    Distributed algorithms for extending the functional lifetime of wireless sensor networks

    The functional lifetime of a wireless sensor network (WSN) is among its most important features and serves as an essential metric in the evaluation of its energy-conserving policies. Approaches for extending the lifetime of a wireless sensor node include using an on/off strategy on the sensor nodes and using a topology control algorithm on each node to regulate its transmission power. However, the need to keep the network functional imposes certain additional constraints on strategies for energy conservation. A sensing constraint requires that the sensing tasks essential to the functionality of the WSN are not compromised. A communication constraint similarly requires that communications essential to an application on the network remain possible even as battery resources on the nodes deplete. This dissertation presents new distributed algorithms for energy conservation under these two classes of constraints: sensing constraints and communication constraints. One sensing constraint, called the representation constraint in this dissertation, is the requirement that active (on) sensor nodes be evenly distributed in the region of interest covered by the sensor network. This dissertation develops two essential metrics which together allow a rigorous quantitative assessment of the quality of representation achieved by a WSN and presents analytical results which bound these metrics in the common scenario of a planar region of arbitrary shape covered by a sensor network deployment. The dissertation further proposes a new distributed algorithm for energy conservation under the representation constraint. Simulation results show that the proposed algorithm significantly improves the quality of representation compared to other related distributed algorithms. They also show that improved spatial uniformity has the welcome side effect of a significant increase in the functional lifetime of a WSN.
    One communication constraint, called the connectivity constraint, requires that the network remain connected during its functional life. The connectivity required may be weak (allowing unidirectional communication between nodes) or strong (requiring bidirectional link-layer communication between each pair of communicating nodes). This dissertation develops new distributed topology control algorithms for energy conservation under both the strong and the weak connectivity constraint. The proposed algorithm for the more ideal scenario of the weak connectivity constraint uses a game-theoretic approach. The dissertation proves the existence of a Nash equilibrium for the game and computes the associated price of anarchy. Simulation results show that the algorithms extend the network lifetime beyond those achieved by previously known algorithms.
    Ph.D., Computer engineering -- Drexel University, 201
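The abstract does not specify the dissertation's two representation metrics or its selection algorithm; one plausible uniformity proxy and a standard farthest-point activation heuristic (both illustrative assumptions, not the dissertation's method) can be sketched as:

```python
import math

def min_pairwise_distance(points):
    """Smallest distance between any two active nodes; larger values
    indicate a more even spread. This is only one plausible uniformity
    proxy, not the dissertation's metric."""
    best = float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            best = min(best, math.dist(points[i], points[j]))
    return best

def greedy_even_subset(nodes, k):
    """Greedily activate k nodes, each time adding the node farthest
    from the already-active set (classic farthest-point heuristic,
    centralized here for clarity; the dissertation's algorithm is
    distributed)."""
    active = [nodes[0]]
    rest = list(nodes[1:])
    while len(active) < k and rest:
        far = max(rest, key=lambda p: min(math.dist(p, a) for a in active))
        rest.remove(far)
        active.append(far)
    return active
```

Turning off clustered redundant nodes in this way both evens out coverage and spreads the traffic (and hence battery drain) across the deployment.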

    Predicting changes in protein thermostability brought about by single- or multi-site mutations

    Background: An important aspect of protein design is the ability to predict changes in protein thermostability arising from single- or multi-site mutations. Protein thermostability is reflected in the change in free energy (ΔΔG) of thermal denaturation.
    Results: We have developed predictive software, Prethermut, based on machine learning methods, to predict the effect of single- or multi-site mutations on protein thermostability. The input vector of Prethermut is based on known structural changes and empirical measurements of changes in potential energy due to protein mutations. In a 10-fold cross-validation test on the M-dataset, consisting of 3366 mutant proteins from ProTherm, the classification accuracy of random forests and the regression accuracy of random forest regression were slightly better than those of support vector machines and support vector regression; the overall classification accuracy and the Pearson correlation coefficient of regression were 79.2% and 0.72, respectively. Prethermut performs better on proteins containing multi-site mutations than on those with single mutations.
    Conclusions: The performance of Prethermut indicates that it is a useful tool for predicting changes in protein thermostability brought about by single- or multi-site mutations and will be valuable in the rational design of proteins.
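The 10-fold cross-validation protocol used to evaluate Prethermut can be sketched in a few lines; the sequential fold assignment below is an assumption for illustration (real evaluations usually shuffle the dataset first):

```python
def k_fold_splits(n, k=10):
    """Index splits for k-fold cross-validation: each of the k folds
    serves once as the held-out test set while the remaining k-1 folds
    form the training set. Fold assignment here is round-robin by
    index, purely for illustration."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i, test in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((train, test))
    return splits
```

Reported metrics such as the 79.2% accuracy and 0.72 Pearson correlation are then averages over the k held-out folds.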

    Identification of the para-nitrophenol catabolic pathway, and characterization of three enzymes involved in the hydroquinone pathway, in Pseudomonas sp. 1-7

    Background: para-Nitrophenol (PNP), a priority environmental pollutant, is hazardous to humans and animals. However, information on the PNP degradation pathways and their enzymes remains limited.
    Results: Pseudomonas sp. 1-7 was isolated from methyl parathion (MP)-polluted activated sludge and was shown to degrade PNP. Two different intermediates, hydroquinone (HQ) and 4-nitrocatechol (4-NC), were detected in the catabolism of PNP. This indicated that Pseudomonas sp. 1-7 degrades PNP by two different pathways, namely the HQ pathway and the hydroxyquinol (BT) pathway (also referred to as the 4-NC pathway). A gene cluster (pdcEDGFCBA) was identified in a 10.6 kb DNA fragment of a fosmid library; this cluster encodes the following enzymes involved in PNP degradation: PNP 4-monooxygenase (PdcA), p-benzoquinone (BQ) reductase (PdcB), hydroxyquinol (BT) 1,2-dioxygenase (PdcC), maleylacetate (MA) reductase (PdcF), 4-hydroxymuconic semialdehyde (4-HS) dehydrogenase (PdcG), and hydroquinone (HQ) 1,2-dioxygenase (PdcDE). Four genes (pdcDEFG) were expressed in E. coli, and the purified pdcDE, pdcG and pdcF gene products were shown by in vitro activity assays to convert HQ to 4-HS, 4-HS to MA, and MA to β-ketoadipate, respectively.
    Conclusions: The cloning, sequencing, and characterization of these genes, along with the functional PNP degradation studies, identified 4-NC, HQ, 4-HS, and MA as intermediates in the degradation pathway of PNP by Pseudomonas sp. 1-7. This is the first conclusive report of both 4-NC- and HQ-mediated degradation of PNP by one microorganism.

    Reliable and Efficient In-Memory Fault Tolerance of Large Language Model Pretraining

    Extensive system scales (i.e., thousands of GPUs/TPUs) and prolonged training periods (i.e., months of pretraining) significantly escalate the probability of failures when training large language models (LLMs). Efficient and reliable fault-tolerance methods are therefore urgently needed. Checkpointing is the primary fault-tolerance method, periodically saving parameter snapshots from GPU memory to disk via CPU memory. In this paper, we identify that the frequency of existing checkpoint-based fault tolerance is significantly limited by storage I/O overheads, which results in hefty re-training costs when restarting from the nearest checkpoint. In response to this gap, we introduce an in-memory fault-tolerance framework for large-scale LLM pretraining. The framework boosts the efficiency and reliability of fault tolerance in three ways: (1) Reduced data transfer and I/O: by asynchronously caching parameters, i.e., sharded model parameters, optimizer states, and RNG states, in volatile CPU memory, our framework significantly reduces communication costs and bypasses checkpoint I/O. (2) Enhanced system reliability: our framework enhances parameter protection with a two-layer hierarchy: snapshot management processes (SMPs) safeguard against software failures, while erasure coding (EC) protects against node failures. This double-layered protection greatly improves the survival probability of the parameters compared to existing checkpointing methods. (3) Improved snapshotting frequency: our framework achieves more frequent snapshotting than asynchronous checkpointing optimizations under the same saving-time budget, which improves fault-tolerance efficiency. Empirical results demonstrate that our framework minimizes the overhead of fault tolerance in LLM pretraining by effectively leveraging redundant CPU resources.
    Comment: Fault Tolerance, Checkpoint Optimization, Large Language Model, 3D parallelism
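The two mechanisms above, asynchronous in-memory snapshotting and erasure-coded shard protection, can be sketched together; this is a toy with a background thread and a single XOR parity shard (real systems use proper erasure codes across nodes, and every name here is illustrative, not the paper's API):

```python
import copy
import threading

class InMemorySnapshotter:
    """Sketch of asynchronous in-memory fault tolerance: a background
    thread copies sharded training state into host memory so the
    training loop never blocks on disk I/O, and a parity shard allows
    rebuilding any single lost shard."""

    def __init__(self):
        self.snapshot = None
        self.parity = None

    def save_async(self, shards):
        # deep-copy first so the trainer may keep mutating its state
        t = threading.Thread(target=self._save, args=(copy.deepcopy(shards),))
        t.start()
        return t

    def _save(self, shards):
        self.snapshot = shards
        # XOR parity: stand-in for erasure coding; any one missing
        # shard equals parity XOR all surviving shards
        parity = bytearray(len(shards[0]))
        for shard in shards:
            for i, b in enumerate(shard):
                parity[i] ^= b
        self.parity = bytes(parity)

    def recover_shard(self, lost_idx):
        rebuilt = bytearray(self.parity)
        for j, shard in enumerate(self.snapshot):
            if j != lost_idx:
                for i, b in enumerate(shard):
                    rebuilt[i] ^= b
        return bytes(rebuilt)
```

Because the snapshot lives in volatile CPU memory rather than on disk, restarts resume from a much more recent state than checkpoint-based schemes allow under the same time budget.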